
feat: add MiniMax provider support with M2.7 as default#3581

Open
octo-patch wants to merge 2 commits into simstudioai:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 14, 2026

Summary

Add MiniMax as a first-class LLM provider with the latest M2.7 model as default.

Changes

  • Add MiniMax provider with OpenAI-compatible API integration
  • Add MiniMax-M2.7 and MiniMax-M2.7-highspeed as latest models (default)
  • Include MiniMax-M2.5 and MiniMax-M2.5-highspeed as alternatives
  • Temperature clamping to MiniMax-supported range (0.01, 1.0]
  • Streaming support for non-tool requests
  • Tool usage control support
  • Unit tests for provider metadata and request handling
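The temperature-clamping behavior described above can be sketched as follows. This is a hypothetical helper, not the PR's actual code; the constant names and the treatment of `undefined` are assumptions, with only the supported range (0.01, 1.0] taken from the PR description:

```typescript
// Hypothetical sketch of clamping a requested temperature into
// MiniMax's supported range (0.01, 1.0]. Names are illustrative.
const MIN_TEMPERATURE = 0.01
const MAX_TEMPERATURE = 1.0

function clampTemperature(temperature: number | undefined): number | undefined {
  // Leave an unset temperature alone so the API default applies.
  if (temperature === undefined) return undefined
  // The lower bound is exclusive (0 is rejected by the API), so values
  // at or below it are raised to MIN_TEMPERATURE; values above the upper
  // bound are lowered to MAX_TEMPERATURE.
  return Math.min(MAX_TEMPERATURE, Math.max(MIN_TEMPERATURE, temperature))
}
```

For example, a request with `temperature: 0` would be sent as `0.01`, and `temperature: 2` as `1.0`, while in-range values pass through unchanged.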

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities.

Testing

  • Unit tests updated and passing (10 tests)
  • Integration tested with MiniMax API

- Add MiniMax chat model provider with OpenAI-compatible API
- Support MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Add MiniMaxIcon to icons component
- Register provider in types, registry, utils, and models
- Clamp temperature to (0, 1] range per MiniMax API constraints
- Add unit tests for provider metadata and request execution

cursor bot commented Mar 14, 2026

PR Summary

Medium Risk
Adds a new third-party provider integration with streaming and tool-calling loops; while mostly additive, it introduces new execution paths that could affect request/response handling, token accounting, and tool invocation behavior.

Overview
Adds MiniMax as a first-class LLM provider, including a new minimaxProvider implementation that calls the OpenAI-compatible API at https://api.minimax.io/v1, supports streaming responses, and supports tool calling with forced-tool cycling and timing/cost aggregation.

Registers MiniMax across the system by extending ProviderId, adding it to the provider registry and client-safe provider metadata, and defining MiniMax model listings/pricing/capabilities (including a (0.01..1] temperature constraint). Also adds a MiniMaxIcon and provider unit tests covering key request behaviors (API key requirement, baseURL, temperature clamping, system prompt injection, and token usage).

Written by Cursor Bugbot for commit 1afdde0. This will update automatically on new commits.


vercel bot commented Mar 14, 2026

The latest updates on your projects.

1 Skipped Deployment
Project | Deployment | Actions | Updated (UTC)
docs    | Skipped    | Skipped | Mar 14, 2026 3:51pm



greptile-apps bot commented Mar 14, 2026

Greptile Summary

This PR adds MiniMax as a new OpenAI-compatible LLM provider, following the established pattern used by Groq, DeepSeek, Cerebras, and others. The registration, icon, model definitions, and streaming plumbing are all consistent with existing providers. Two logic bugs in the tool-calling path of apps/sim/providers/minimax/index.ts need to be addressed before merging, and one pricing value should be double-checked:

  • Silent error swallowing in the tool loop: The inner try/catch (lines 245–438) logs errors but does not re-throw them. Any failure during a mid-loop model call (e.g., network timeout) silently returns partial results to the executor with no error signal.
  • Orphaned tool_calls entries for unrecognised tools: When the model calls a tool name not present in request.tools, the promise resolves to null and no tool-result message is added to currentMessages. However, the assistant message with the corresponding tool_calls entry has already been pushed. The subsequent API call will fail because every tool_calls entry must be matched by a tool role message with the same tool_call_id.
  • The MiniMax-M2.5-highspeed model pricing ($0.60/M input, $2.40/M output) is 2× more expensive than the base model, which is unusual for a "highspeed" variant — the values should be verified against MiniMax's official pricing documentation.

Confidence Score: 2/5

  • Not safe to merge until the two tool-loop logic bugs are resolved — they will cause silent failures and API errors for any workflow using tool calling with MiniMax.
  • The registration, streaming, and non-tool paths are correctly implemented. However, the inner try-catch silently swallows errors in the tool iteration loop, and unrecognised tool calls produce orphaned assistant messages that will break the API conversation. Both are in the critical hot path for agentic workflows.
  • apps/sim/providers/minimax/index.ts requires the most attention — specifically the inner catch block (line 436) and the null-return path for unknown tools (line 266).

Important Files Changed

Filename Overview
apps/sim/providers/minimax/index.ts Core provider implementation with two logic bugs: silent error swallowing in the tool loop (errors are caught but not re-thrown, returning silent partial results) and orphaned tool_call messages when a tool is not found (will cause API errors on the subsequent model call).
apps/sim/providers/minimax/utils.ts Thin wrapper around the shared createOpenAICompatibleStream utility — correct and minimal.
apps/sim/providers/minimax/index.test.ts 10 unit tests covering metadata, base URL, content return, temperature clamping, system prompt, and token usage. No coverage for tool-calling paths (orphaned tool calls, error propagation, forced-tool cycling).
apps/sim/providers/models.ts Adds minimax provider definition with two models, correct context windows, and temperature caps; MiniMax-M2.5-highspeed pricing is 2× the base model — worth confirming against official docs.
apps/sim/providers/types.ts Adds 'minimax' to the ProviderId union type — straightforward and correct.
apps/sim/providers/registry.ts Registers minimaxProvider in the provider registry — correct and consistent with other provider registrations.
apps/sim/providers/utils.ts Adds minimax: buildProviderMetadata('minimax') to the client-safe provider metadata map — correct and consistent.
apps/sim/components/icons.tsx Adds MiniMaxIcon as a custom SVG icon — valid SVG structure, consistent with other provider icons in the file.

Sequence Diagram

sequenceDiagram
    participant Executor
    participant MiniMaxProvider
    participant OpenAI_Client as OpenAI Client (MiniMax base URL)
    participant ToolExecutor

    Executor->>MiniMaxProvider: executeRequest(request)
    MiniMaxProvider->>OpenAI_Client: chat.completions.create(payload)
    OpenAI_Client-->>MiniMaxProvider: response (may include tool_calls)

    alt Streaming with no tools
        MiniMaxProvider-->>Executor: StreamingExecution (early return)
    else Non-streaming or tools present
        loop Tool iteration (≤ MAX_TOOL_ITERATIONS)
            MiniMaxProvider->>ToolExecutor: executeTool(name, params) [parallel]
            ToolExecutor-->>MiniMaxProvider: tool results
            MiniMaxProvider->>OpenAI_Client: chat.completions.create(next payload)
            OpenAI_Client-->>MiniMaxProvider: next response
        end
        alt Streaming requested (post-tool)
            MiniMaxProvider->>OpenAI_Client: chat.completions.create(stream=true)
            OpenAI_Client-->>MiniMaxProvider: stream
            MiniMaxProvider-->>Executor: StreamingExecution
        else Non-streaming
            MiniMaxProvider-->>Executor: ProviderResponse
        end
    end

Last reviewed commit: d5d4e40

Comment on lines +436 to +438
} catch (error) {
  logger.error('Error in MiniMax request:', { error })
}

Silent error swallowing breaks error propagation

The inner catch block catches all errors that occur during the tool-calling loop — including network failures on subsequent model calls (e.g., line 387 minimax.chat.completions.create) — but only logs them and lets execution fall through. The caller receives a ProviderResponse with partial/empty toolCalls and no indication that an error occurred, making failures invisible to the executor.

This should re-throw (or wrap in ProviderError) so the outer catch can convert it to a proper ProviderError with timing data:

Suggested change

Before:

} catch (error) {
  logger.error('Error in MiniMax request:', { error })
}

After:

} catch (error) {
  logger.error('Error in MiniMax request:', { error })
  throw error
}
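The propagation pattern the reviewer asks for can be sketched in isolation. The names below (`ProviderError`, `executeWithTiming`) are hypothetical stand-ins, not the PR's actual types:

```typescript
// Sketch: the inner catch logs and re-throws, so the outer catch can wrap
// the failure with timing data instead of silently returning partial results.
class ProviderError extends Error {
  constructor(
    message: string,
    readonly durationMs: number
  ) {
    super(message)
  }
}

async function executeWithTiming(run: () => Promise<string>): Promise<string> {
  const start = Date.now()
  try {
    try {
      return await run() // e.g. the tool-calling loop
    } catch (error) {
      console.error('Error in tool loop:', { error })
      throw error // propagate instead of swallowing
    }
  } catch (error) {
    // Outer handler converts any failure into a typed error with timing.
    throw new ProviderError(String(error), Date.now() - start)
  }
}
```

With the inner re-throw in place, a mid-loop network failure surfaces to the caller as a `ProviderError` rather than as an apparently successful response with empty tool results.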

Comment on lines +263 to +267
const toolArgs = JSON.parse(toolCall.function.arguments)
const tool = request.tools?.find((t) => t.id === toolName)

if (!tool) return null

Orphaned tool_calls message when tool is not found

When the model hallucinates a tool name (not present in request.tools), this function returns null. However, the assistant message is unconditionally pushed to currentMessages (lines 303–313) with all tool_calls entries. Any tool call returning null here will not produce a corresponding tool role message.

The MiniMax API (like other OpenAI-compatible APIs) requires every element in tool_calls to have a matching tool role message with the same tool_call_id. Sending mismatched entries will cause an API error on the very next minimax.chat.completions.create call.
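The pairing rule being violated can be illustrated with a small checker. The message shapes below follow the general OpenAI-compatible chat format; the helper itself is a hypothetical sketch, not code from the PR:

```typescript
// Every tool_calls entry on an assistant message must be answered by a
// `tool` role message carrying the same tool_call_id before the next
// model call; this helper returns the ids of any unanswered entries.
type ChatMessage =
  | {
      role: 'assistant'
      content: string | null
      tool_calls?: {
        id: string
        type: 'function'
        function: { name: string; arguments: string }
      }[]
    }
  | { role: 'tool'; tool_call_id: string; content: string }

function findOrphanedToolCalls(messages: ChatMessage[]): string[] {
  const answered = new Set<string>()
  for (const m of messages) {
    if (m.role === 'tool') answered.add(m.tool_call_id)
  }
  const orphaned: string[] = []
  for (const m of messages) {
    if (m.role === 'assistant' && m.tool_calls) {
      for (const tc of m.tool_calls) {
        if (!answered.has(tc.id)) orphaned.push(tc.id)
      }
    }
  }
  return orphaned
}
```

In the buggy path, a hallucinated tool name leaves exactly this kind of orphaned id in the conversation, and the next completions call is rejected.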

A simple fix is to add an error result for unrecognised tool calls rather than returning null:

Suggested change

Before:

const toolArgs = JSON.parse(toolCall.function.arguments)
const tool = request.tools?.find((t) => t.id === toolName)

if (!tool) return null

After:

const toolArgs = JSON.parse(toolCall.function.arguments)
const tool = request.tools?.find((t) => t.id === toolName)

if (!tool) {
  const toolCallEndTime = Date.now()
  return {
    toolCall,
    toolName,
    toolParams: {},
    result: {
      success: false,
      output: undefined,
      error: `Tool "${toolName}" not found`,
    },
    startTime: toolCallStartTime,
    endTime: toolCallEndTime,
    duration: toolCallEndTime - toolCallStartTime,
  }
}

Comment on lines +1214 to +1222
{
  id: 'MiniMax-M2.5-highspeed',
  pricing: {
    input: 0.6,
    cachedInput: 0.03,
    output: 2.4,
    updatedAt: '2025-06-01',
  },
  capabilities: {

Verify MiniMax-M2.5-highspeed pricing

MiniMax-M2.5-highspeed is priced at 2× the cost of the base model (input: $0.60/M vs $0.30/M). This is the opposite of what "highspeed" variants typically imply (usually a cheaper, faster version). Please double-check the MiniMax pricing page to confirm the values are not swapped before merging.

<title>MiniMax</title>
<rect width='120' height='120' rx='24' fill='#1A1A2E' />
<path
d='M30 80V40l15 20 15-20v40M70 40v40M80 40l10 20 10-20v40'

SVG icon path draws incomplete second "M" character

Low Severity

The third sub-path M80 40l10 20 10-20v40 in the MiniMaxIcon SVG is missing its left vertical stroke. It traces (80,40)→(90,60)→(100,40)→(100,80), rendering as a V-shape with a trailing line — not an "M". Compare with the first sub-path which correctly starts from the bottom M30 80V40l15 20 15-20v40, drawing the upward left stroke before the V-shape. The third sub-path needs to start at the bottom (e.g. M80 80V40l10 20 10-20v40) to draw a matching "M".


- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Set MiniMax-M2.7 as default model
- Keep all previous models as alternatives
- Update related tests

vercel bot commented Mar 18, 2026

Someone is attempting to deploy a commit to the Sim Team on Vercel.

A member of the Team first needs to authorize it.

@octo-patch octo-patch changed the title feat: add MiniMax provider support feat: add MiniMax provider support with M2.7 as default Mar 18, 2026

@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

There are 2 total unresolved issues (including 1 from previous review).


Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

fill='none'
/>
</svg>
)

MiniMax icon is a hand-drawn placeholder, not official

Low Severity

The MiniMaxIcon is a custom hand-drawn SVG (drawing "MiM" with stroke paths) with a hardcoded dark background rect (fill='#1A1A2E') and hardcoded stroke color (#E94560). Every other provider icon in this file uses either currentColor for theme awareness or official brand SVG paths without opaque background fills. This icon won't adapt to light/dark themes and will visually stand out from all other provider icons. The official MiniMax logo SVG is publicly available (e.g., on LobeHub, Wikimedia Commons).
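A theme-aware icon in the style the reviewer describes might look like the sketch below. The path geometry is the PR's placeholder data with the third sub-path corrected to start at the bottom, not the official MiniMax logo; the function name is hypothetical:

```typescript
// Hypothetical sketch of a theme-aware icon: no opaque background rect,
// and the stroke uses currentColor so it follows the surrounding theme.
// The path data is illustrative placeholder geometry, NOT the official mark.
function miniMaxIconSvg(): string {
  return [
    '<svg viewBox="0 0 120 120" fill="none" xmlns="http://www.w3.org/2000/svg">',
    '  <path',
    // Third sub-path starts at the bottom (M80 80V40...) so it draws a full "M".
    '    d="M30 80V40l15 20 15-20v40M70 40v40M80 80V40l10 20 10-20v40"',
    '    stroke="currentColor" stroke-width="8" stroke-linecap="round" fill="none" />',
    '</svg>',
  ].join('\n')
}
```

Dropping the background rect and hardcoded stroke color lets the icon inherit the text color of its container, matching the other provider icons in the file.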

